
fix unhandled error -172 on entry committable causing the server to crash #2731

Open
SylvainSenechal wants to merge 1 commit into development/9.4 from bugfix/BB-758

Conversation

@SylvainSenechal
Contributor

Issue: BB-758

Seen in Zenko CI:
[screenshot]

@bert-e
Contributor

bert-e commented Apr 8, 2026

Hello sylvainsenechal,

My role is to assist you with the merge of this
pull request. Please type @bert-e help to get information
on this process, or consult the user documentation.

Available options
name description privileged authored
/after_pull_request Wait for the given pull request id to be merged before continuing with the current one.
/bypass_author_approval Bypass the pull request author's approval
/bypass_build_status Bypass the build and test status
/bypass_commit_size Bypass the check on the size of the changeset TBA
/bypass_incompatible_branch Bypass the check on the source branch prefix
/bypass_jira_check Bypass the Jira issue check
/bypass_peer_approval Bypass the pull request peers' approval
/bypass_leader_approval Bypass the pull request leaders' approval
/approve Instruct Bert-E that the author has approved the pull request. ✍️
/create_pull_requests Allow the creation of integration pull requests.
/create_integration_branches Allow the creation of integration branches.
/no_octopus Prevent Wall-E from doing any octopus merge and use multiple consecutive merge instead
/unanimity Change review acceptance criteria from one reviewer at least to all reviewers
/wait Instruct Bert-E not to run until further notice.
Available commands
name description privileged
/help Print Bert-E's manual in the pull request.
/status Print Bert-E's current status in the pull request TBA
/clear Remove all comments from Bert-E from the history TBA
/retry Re-start a fresh build TBA
/build Re-start a fresh build TBA
/force_reset Delete integration branches & pull requests, and restart merge process from the beginning.
/reset Try to remove integration branches unless there are commits on them which do not appear on the source branch.

Status report is not available.

@codecov

codecov bot commented Apr 8, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 74.74%. Comparing base (fc1e238) to head (b988ab1).
⚠️ Report is 5 commits behind head on development/9.4.

Additional details and impacted files

Impacted file tree graph

Files with missing lines Coverage Δ
lib/BackbeatConsumer.js 93.86% <100.00%> (-0.24%) ⬇️

... and 3 files with indirect coverage changes

Components Coverage Δ
Bucket Notification 80.37% <ø> (ø)
Core Library 81.20% <100.00%> (+0.47%) ⬆️
Ingestion 70.53% <ø> (ø)
Lifecycle 79.10% <ø> (ø)
Oplog Populator 85.83% <ø> (ø)
Replication 59.61% <ø> (-0.04%) ⬇️
Bucket Scanner 85.76% <ø> (ø)
@@                 Coverage Diff                 @@
##           development/9.4    #2731      +/-   ##
===================================================
+ Coverage            74.55%   74.74%   +0.18%     
===================================================
  Files                  200      200              
  Lines                13623    13625       +2     
===================================================
+ Hits                 10157    10184      +27     
+ Misses                3456     3431      -25     
  Partials                10       10              
Flag Coverage Δ
api:retry 9.13% <0.00%> (-0.01%) ⬇️
api:routes 8.95% <0.00%> (-0.01%) ⬇️
bucket-scanner 85.76% <ø> (ø)
ft_test:queuepopulator 10.90% <0.00%> (+0.89%) ⬆️
ingestion 12.48% <0.00%> (-0.01%) ⬇️
lib 7.62% <100.00%> (+0.01%) ⬆️
lifecycle 18.73% <75.00%> (+0.01%) ⬆️
notification 1.02% <0.00%> (-0.01%) ⬇️
oplogPopulator 0.14% <0.00%> (-0.01%) ⬇️
replication 18.47% <75.00%> (+<0.01%) ⬆️
unit 51.20% <100.00%> (+0.04%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.


@claude

claude bot commented Apr 8, 2026

LGTM

Review by Claude Code

@bert-e
Contributor

bert-e commented Apr 8, 2026

Waiting for approval

The following approvals are needed before I can proceed with the merge:

  • the author

  • 2 peers

@SylvainSenechal SylvainSenechal requested review from a team, benzekrimaha and maeldonn April 8, 2026 18:51
@SylvainSenechal SylvainSenechal marked this pull request as ready for review April 8, 2026 18:51
Comment thread lib/BackbeatConsumer.js Outdated
try {
    this._consumer.offsetsStore([{ topic, partition, offset: committableOffset }]);
} catch (e) {
    // ERR__STATE (-172) means the consumer is not in a valid state
Contributor Author


So this seems to happen on rebalance or when Kafka is in a weird state.

  • Before: the pod would crash, the offset wouldn't be committed, and another pod would re-consume and reprocess the entry.
  • Now: the pod doesn't crash, but the offset is still not committed, and the entry is re-consumed later by another pod, or possibly the same pod, since it no longer crashes.
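
For context, here is a minimal sketch of the pattern discussed in this thread: catching ERR__STATE from node-rdkafka's offsetsStore() instead of letting it crash the process. The helper name and logger are hypothetical; only offsetsStore() and kafka.CODES.ERRORS.ERR__STATE come from the snippet above.

```js
const kafka = require('node-rdkafka');

// Hypothetical helper illustrating the guard discussed in this thread.
function storeCommittableOffset(consumer, log, topic, partition, offset) {
    try {
        consumer.offsetsStore([{ topic, partition, offset }]);
    } catch (e) {
        // ERR__STATE (-172): the consumer is no longer in a valid state for
        // this partition (typically revoked during a rebalance), so the offset
        // cannot be stored; the entry will simply be re-consumed later.
        if (e.code === kafka.CODES.ERRORS.ERR__STATE) {
            log.debug('skipping offsetsStore: consumer not in a valid state',
                { topic, partition, offset, error: e.message });
            return;
        }
        throw e;
    }
}
```

Whether such a failure should be logged at debug or error level is debated further down in this thread.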

Contributor


do we know when this happens?

in order to properly handle rebalance, we actually need to commit before the rebalance (hence the logic implemented here). If there are some cases where it does not work, we need to understand why/how, and not only mask the problem....

(there will always be corner cases where we cannot commit and will re-play the message, that is fine... but we need to understand why/when it happens, to make sure this is really the unavoidable corner case and not a deeper issue)

Contributor


could the root cause be the same as 11fe6ee?

Contributor Author


I don't really know when, and more importantly why, this happens :/
Maybe it would be worth waiting for the PR from Thomas to be merged and run in the Zenko CI, to see if this issue still exists.

Contributor Author


Okay, I re-pulled the Zenko CI logs from a successful run and asked Claude to analyze them, paying attention to the timing of the logs.

What it found is that this happens during a rebalance: we first have one pod handling all the partitions, then the operator deployed another pod, and when that new pod joined the Kafka consumer group, a rebalance was triggered.
The old pod had 5 partitions revoked and was re-assigned only 2 of them. The problem is that some tasks that were already in flight finished processing after the revoke happened. When their callback tried to call offsetsStore() to mark the offset as ready to commit, librdkafka threw ERR__STATE because those partitions were no longer assigned to this pod, so it crashed.
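
To make the race described above concrete, one illustrative way to guard it is to check the current assignment before storing the offset. This is only a sketch, not what the PR does; assignments() is the node-rdkafka call returning the partitions currently assigned to this consumer, and the function names are hypothetical.

```js
// Hypothetical illustration of the race described above: a task finishes
// after its partition was revoked, so offsetsStore() would throw ERR__STATE.
function isStillAssigned(consumer, topic, partition) {
    return consumer.assignments()
        .some(a => a.topic === topic && a.partition === partition);
}

function onTaskDone(consumer, log, topic, partition, offset) {
    if (!isStillAssigned(consumer, topic, partition)) {
        // The rebalance already moved this partition to another pod; that pod
        // will re-consume the message, so there is nothing left to commit here.
        log.info('partition revoked before offset could be stored',
            { topic, partition, offset });
        return;
    }
    consumer.offsetsStore([{ topic, partition, offset }]);
}
```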

Contributor


so the problem is actually much deeper: the whole logic here is there to delay the rebalance until we commit:

  • which we normally have time for (not sure about the numbers, but a rebalance can last a significant time, unless some Kafka params changed)
  • unless we have so-called "slow tasks", i.e. messages which take more than the time limit to process. So there is an issue in the processing: either the timeouts do not match our target (especially on a possibly overloaded CI), or we really hit some corner case where processing is stuck

The good thing is that it happens on CI: so whatever the reason, we have a way to reproduce it, somehow...
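
For readers unfamiliar with the timing constraints referenced above, here is a hypothetical node-rdkafka consumer configuration showing the librdkafka settings that bound how long processing can take before the group evicts a member and rebalances. The group id, broker, and values are placeholders; the real ones live in the service configuration, not in this PR.

```js
const kafka = require('node-rdkafka');

const consumer = new kafka.KafkaConsumer({
    'group.id': 'backbeat-example-group',      // hypothetical group id
    'metadata.broker.list': 'localhost:9092',  // hypothetical broker
    // If the application does not poll within this interval (a "slow task"),
    // the consumer leaves the group and a rebalance is triggered.
    'max.poll.interval.ms': 300000,
    // Missing heartbeats for this long also evicts the member from the group.
    'session.timeout.ms': 45000,
}, {});
```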

Comment thread lib/BackbeatConsumer.js Outdated
// state transition) — treat as non-fatal and log at debug level.
const logger = this._consumer.isConnected() &&
    e.code !== kafka.CODES.ERRORS.ERR__STATE
    ? this._log.error : this._log.debug;
Contributor

@francoisferrand francoisferrand Apr 9, 2026


this is never a debug log: we don't expect this to happen at all... So always outputting an error log may be appropriate

Comment thread lib/BackbeatConsumer.js
this._consumer.offsetsStore([{ topic, partition,
offset: committableOffset }]);
if (committableOffset !== null && !this.isPaused() && this._consumer.isConnected()) {
try {
Contributor


since you added the isConnected() guard, do we also need to catch the exception?

Contributor Author


I think we could still have cases where isConnected() returns true, but then something goes wrong inside offsetsStore() 🤔

Contributor


do we? this is JavaScript, so single-threaded...

Comment thread tests/unit/backbeatConsumer.js Outdated
Contributor

@francoisferrand francoisferrand left a comment


seems like there is a deeper issue, we should investigate more to make sure we don't just hide another bug

@scality scality deleted a comment from bert-e Apr 14, 2026
@claude

claude bot commented Apr 14, 2026

LGTM

Review by Claude Code

@SylvainSenechal
Contributor Author

seems like there is a deeper issue, we should investigate more to make sure we don't just hide another bug

Ok, considering your other comments, I will try to set up a functional test to see if we can first reproduce the error; if we can reproduce it, we will be able to work on a fix and really verify that it works

Comment thread lib/BackbeatConsumer.js Outdated
Comment thread lib/BackbeatConsumer.js
@claude

claude bot commented Apr 16, 2026

  • Guard order in onEntryCommittable checks isPaused() before isConnected(), but isPaused() calls subscription(), which can throw on a disconnected consumer. Swap to isConnected() && !isPaused() to match the existing pattern at line 496.
    - Suggested fix: reorder to this._consumer.isConnected() && !this.isPaused()
  • The error log in the catch block is missing groupId, making it harder to triage in multi-consumer-group deployments.
    - Suggested fix: add groupId: this._groupId to the error log object

    Review by Claude Code
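
A minimal sketch of what applying both suggestions above could look like, reusing the names already shown in this thread (this._consumer, this._log, committableOffset, this._groupId); this is an illustration, not the actual patch.

```js
if (committableOffset !== null &&
    this._consumer.isConnected() && !this.isPaused()) {
    try {
        this._consumer.offsetsStore(
            [{ topic, partition, offset: committableOffset }]);
    } catch (e) {
        // Always surface this as an error, and include the consumer group so
        // the log can be triaged in multi-consumer-group deployments.
        this._log.error('error storing committable offset', {
            topic,
            partition,
            offset: committableOffset,
            groupId: this._groupId,
            error: e.message,
        });
    }
}
```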

@claude

claude bot commented Apr 16, 2026

LGTM

Review by Claude Code

@SylvainSenechal
Contributor Author

@francoisferrand
I added a test that sets up conditions to reproduce this err -172, which is now caught with the try/catch. I also ran this test locally with and without the try/catch that I added, and confirmed that this error is reproducible when a rebalance happens
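
For reference, a self-contained sketch of what such a test could look like (mocha-style, with a hand-rolled fake consumer). It is hypothetical and does not reproduce the actual test added in tests/unit/backbeatConsumer.js.

```js
const assert = require('assert');

// Hypothetical handler standing in for the real onEntryCommittable path.
function storeCommittableOffset(consumer, topic, partition, offset) {
    try {
        consumer.offsetsStore([{ topic, partition, offset }]);
    } catch (e) {
        if (e.code !== -172) { // kafka.CODES.ERRORS.ERR__STATE
            throw e;
        }
        // Swallowed: the offset will be stored again when the entry is
        // re-consumed after the rebalance.
    }
}

describe('offset storing during a rebalance', () => {
    it('does not throw when offsetsStore() fails with ERR__STATE (-172)', () => {
        const fakeConsumer = {
            offsetsStore: () => {
                const err = new Error('Local: Erroneous state');
                err.code = -172;
                throw err;
            },
        };
        assert.doesNotThrow(
            () => storeCommittableOffset(fakeConsumer, 'my-topic', 0, 42));
    });
});
```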
